991.
When interacting with a source control management system, developers often commit unrelated or loosely related code changes in a single transaction. When analyzing version histories, such tangled changes make all changes to all modules appear related, possibly compromising the resulting analyses through noise and bias. In an investigation of five open-source Java projects, we found that between 7 % and 20 % of all bug fixes consist of multiple tangled changes. Using a multi-predictor approach to untangle changes, we show that on average at least 16.6 % of all source files are incorrectly associated with bug reports. These incorrect bug-to-file associations do not seem to significantly impact models that classify source files as having at least one bug or no bugs. However, our experiments show that untangling tangled code changes can yield more accurate regression bug prediction models than models trained and tested on tangled bug datasets; in our experiments, the statistically significant accuracy improvements lie between 5 % and 200 %. We recommend better change organization to limit the impact of tangled changes.
992.
Several code smell detection tools have been developed, providing different results because smells can be subjectively interpreted, and hence detected, in different ways. In this paper, we perform what is, to the best of our knowledge, the largest experiment applying machine learning algorithms to code smells. We experiment with 16 different machine-learning algorithms on four code smells (Data Class, Large Class, Feature Envy, Long Method) and 74 software systems, with 1986 manually validated code smell samples. We found that all algorithms achieved high performance on the cross-validation data set; the highest performance was obtained by J48 and Random Forest, while the worst was achieved by support vector machines. However, the low prevalence of code smells, i.e., imbalanced data, in the entire data set caused varying performance that needs to be addressed in future studies. We conclude that the application of machine learning to the detection of these code smells can provide high accuracy (>96 %), and only a hundred training examples are needed to reach at least 95 % accuracy.
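The metric-based classification idea above can be sketched in miniature: learn a threshold on a size metric that separates labeled smelly classes from clean ones. The feature (lines of code) and the tiny hand-made dataset are illustrative assumptions, not the paper's validated samples or its 16 algorithms.

```python
# Minimal sketch of metric-based smell detection: a decision stump learned
# from labeled examples. Real studies use many metrics and richer learners
# (J48, Random Forest); this only shows the supervised-learning shape.

def train_stump(samples):
    """samples: list of (lines_of_code, is_smelly). Pick the LOC threshold
    that best separates smelly from clean classes on the training data."""
    best_thr, best_acc = None, -1.0
    for thr, _ in samples:
        acc = sum((loc > thr) == smelly for loc, smelly in samples) / len(samples)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

data = [(520, True), (610, True), (480, True),
        (90, False), (130, False), (75, False)]
thr = train_stump(data)
print(thr, [loc > thr for loc in (700, 100)])  # classify two unseen classes
```

With more features, the same train/validate loop generalizes to the ensemble learners the study compares.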
993.
We are witnessing significant growth in the number of smartphone users and advances in phone hardware and sensor technology. In conjunction with the popularity of video applications such as YouTube, an unprecedented number of user-generated videos (UGVs) are being generated and consumed by the public, which leads to a Big Data challenge in social media. In a very large video repository, it is difficult to index and search videos in their unstructured form. However, due to recent developments, videos can be geo-tagged (e.g., with locations from a GPS receiver and viewing directions from a digital compass) at acquisition time, which provides potential for efficient management of video data. Ideally, each video frame can be tagged by the spatial extent of its coverage area, termed Field-Of-View (FOV). This effectively converts a challenging video management problem into a spatial database problem. This paper addresses the challenges of large-scale video data management using spatial indexing and querying of FOVs, especially by maximally harnessing the geographical properties of FOVs. Since FOVs are shaped like pie slices and contain both location and orientation information, conventional spatial indexes, such as the R-tree, cannot index them efficiently. The distribution of UGVs' locations is non-uniform (e.g., more FOVs in popular locations). Consequently, even multilevel grid-based indexes, which can handle both location and orientation, have limitations in managing the skewed distribution. Additionally, since UGVs are usually captured in a casual way with diverse setups and movements, no a priori assumption can be made to condense them in an index structure. To overcome these challenges, we propose a class of new R-tree-based index structures that effectively harness FOVs' camera locations, orientations, and view-distances, in tandem, for both filtering and optimization. We also present novel search strategies and algorithms for efficient range and directional queries on our indexes. Our experiments using both real-world and large synthetic video datasets (over 30 years' worth of videos) demonstrate the scalability and efficiency of our proposed indexes and search algorithms.
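The pie-slice FOV model described above can be sketched as a point-in-region test combining the three properties the indexes exploit: camera location, viewing direction, and view-distance. The angle conventions (degrees, east = 0°, half-angle aperture) are illustrative assumptions, not the paper's implementation.

```python
# Sketch: is a query point inside a pie-slice Field-Of-View? This is the
# kind of refinement test a range query runs after index-level filtering.
import math

def point_in_fov(px, py, cam_x, cam_y, heading_deg, half_angle_deg, view_dist):
    """True if point (px, py) lies within view_dist of the camera and
    within half_angle_deg of its heading."""
    dx, dy = px - cam_x, py - cam_y
    if math.hypot(dx, dy) > view_dist:
        return False  # beyond the viewable distance
    bearing = math.degrees(math.atan2(dy, dx)) % 360
    # smallest absolute angular difference between bearing and heading
    diff = abs((bearing - heading_deg + 180) % 360 - 180)
    return diff <= half_angle_deg

# Camera at the origin facing east, 30-degree half angle, 100 m reach.
print(point_in_fov(50, 10, 0, 0, 0, 30, 100))   # in front, inside the slice
print(point_in_fov(-50, 0, 0, 0, 0, 30, 100))   # behind the camera
```

A location-only index (plain R-tree) would pass both points through its filter; exploiting orientation prunes the second one earlier, which is the paper's motivation.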
994.
Tracking spatio-temporal activity is highly relevant for domains like security, health, and quality management. Since animal welfare became a topic in politics and legislation, locomotion patterns of livestock have received increasing interest. In contrast to the monitoring of pedestrians, cattle activity tracking poses special challenges to both sensors and data analysis. Interesting states are not directly observable by a single sensor. In addition, sensors must be accepted by the cattle and need to be robust enough to cope with a rough environment. In this article, we introduce a novel combination of heart rate and positioning sensors. Attached to the neck and chest, they interfere less than accelerometers at the ankles. Exploiting the potential of such a combined sensor system, which records locomotion as well as non-spatial information from the heart rate sensor, is however challenging. We introduce a novel two-level method for activity tracking focused on the duration and sequence of activity states. We combine a Support Vector Machine (SVM) with a Conditional Random Field (CRF) and extend the CRF with an explicit representation of duration. The SVM characterizes local activity states, whereas the CRF maps sequences of local states to activity sequences incorporating spatial and non-spatial contextual knowledge. This combination provides a reliable and comprehensive identification of defined activity patterns, as well as their chronology and durations, suitable for integration into an activity database. This database is used to extract physiological parameters and promises insights into internal states such as fitness, well-being, and stress. Interestingly, we were able to demonstrate a significant correlation between resting pulse rate and the day of pregnancy.
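The two-level idea can be sketched as follows: a local classifier emits per-window state scores, and a sequence model then smooths them with transition costs so that implausibly brief state switches are suppressed. The state names, scores, and switch penalty below are made-up stand-ins for the SVM outputs and CRF parameters, and the decoder is a generic Viterbi pass rather than the paper's duration-extended CRF.

```python
# Sketch: smooth noisy per-window activity scores with a transition penalty
# (Viterbi-style dynamic programming). Higher score = better.

STATES = ["resting", "grazing", "walking"]
SWITCH_PENALTY = 1.5  # cost of changing state between adjacent windows

def smooth(local_scores):
    best = [dict(local_scores[0])]  # best[t][s]: best score ending in s at t
    back = [{}]                     # back-pointers for path recovery
    for t in range(1, len(local_scores)):
        cur, bp = {}, {}
        for s in STATES:
            prev, score = max(
                ((p, best[t - 1][p] - (0 if p == s else SWITCH_PENALTY))
                 for p in STATES),
                key=lambda x: x[1])
            cur[s] = score + local_scores[t][s]
            bp[s] = prev
        best.append(cur)
        back.append(bp)
    state = max(best[-1], key=best[-1].get)
    path = [state]
    for t in range(len(local_scores) - 1, 0, -1):
        state = back[t][state]
        path.append(state)
    return path[::-1]

rest = {"resting": 2, "grazing": 0, "walking": 0}
noisy = {"resting": 0.5, "grazing": 0, "walking": 1.0}  # spurious "walking"
print(smooth([rest, noisy, rest]))  # the outlier window is smoothed away
```

The duration extension mentioned in the abstract would additionally score how long each state persists, beyond the pairwise penalty shown here.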
995.
996.
An important task of speaker verification is to generate speaker-specific models and match an input speaker's utterance against these models. This paper compares the performance of a text-dependent speaker verification system using Mel Frequency Cepstral Coefficient (MFCC) features and different Vector Quantization (VQ)-based modelling techniques to generate the speaker-specific models. Speaker-specific information is mainly represented by spectral features, and from these features we developed the model that serves as the key entity for determining the claimed identity of the speaker. For modelling, we used Linde-Buzo-Gray (LBG) VQ, a proposed adaptive LBG VQ, and Fuzzy C-Means (FCM) VQ to generate the speaker-specific models. Experiments performed on a microphone-recorded database show that accuracy depends significantly on the codebook representation and codebook size in all VQ techniques and, for FCM VQ, also on the value of the learning parameter of the objective function.
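The LBG codebook training named above can be sketched in a few lines: start from the global centroid, repeatedly split each codeword into a perturbed pair, and refine with nearest-neighbor / centroid-update iterations. Real systems run this on MFCC vectors; the 2-D points and the perturbation constant below are toy assumptions.

```python
# Sketch of LBG vector quantization: binary-splitting initialization
# followed by k-means-style refinement of the codebook.

def lbg(vectors, codebook_size, eps=0.01, iters=20):
    dim = len(vectors[0])
    centroid = [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]
    codebook = [centroid]
    while len(codebook) < codebook_size:
        # Split every codeword into a (1+eps)/(1-eps) perturbed pair.
        codebook = [[c * (1 + s) for c in cw]
                    for cw in codebook for s in (eps, -eps)]
        for _ in range(iters):  # refine: assign vectors, recompute centroids
            clusters = [[] for _ in codebook]
            for v in vectors:
                i = min(range(len(codebook)),
                        key=lambda i: sum((v[d] - codebook[i][d]) ** 2
                                          for d in range(dim)))
                clusters[i].append(v)
            for i, cl in enumerate(clusters):
                if cl:
                    codebook[i] = [sum(v[d] for v in cl) / len(cl)
                                   for d in range(dim)]
    return codebook

pts = [[0, 0], [0, 1], [10, 10], [10, 11]]
print(sorted(lbg(pts, 2)))  # two codewords, one per cluster
```

Verification then quantizes a test utterance against each enrolled speaker's codebook and accepts the claim when the average distortion is low enough; FCM VQ replaces the hard assignment above with fuzzy memberships.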
997.
In natural language processing, a crucial subsystem in a wide range of applications is the part-of-speech (POS) tagger, which labels (or classifies) unannotated words of natural language with POS labels corresponding to categories such as noun, verb, or adjective. Mainstream approaches are generally corpus-based: a POS tagger learns from a corpus of pre-annotated data how to correctly tag unlabeled data. Presented here is a brief state-of-the-art account of POS tagging. POS tagging approaches make use of labeled corpora to train computational models. Several typical models of three kinds of tagging are introduced in this article: rule-based tagging, statistical approaches, and evolutionary algorithms. The advantages and pitfalls of each are discussed and analyzed. Some rule-based and stochastic methods have achieved accuracies of 93–96 %, while some evolutionary algorithms reach about 96–97 %.
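Corpus-based tagging at its simplest can be sketched as a unigram tagger: assign each word its most frequent tag in the training corpus and fall back to a default tag for unknown words. The miniature corpus and tag set below are illustrative; real statistical taggers add context (e.g., HMMs over tag sequences).

```python
# Sketch: a most-frequent-tag (unigram) POS tagger trained from
# pre-annotated (word, tag) pairs.
from collections import Counter, defaultdict

train = [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB"),
         ("the", "DET"), ("cat", "NOUN"), ("runs", "VERB"),
         ("runs", "NOUN")]  # "runs" is ambiguous: verb twice, noun once

counts = defaultdict(Counter)
for word, t in train:
    counts[word][t] += 1

def tag(sentence, default="NOUN"):
    """Tag each word with its most frequent training tag, else default."""
    return [(w, counts[w].most_common(1)[0][0] if w in counts else default)
            for w in sentence]

print(tag(["the", "dog", "runs"]))
```

Baselines of this form already tag around 90 % of tokens correctly on real corpora, which is why the 93–97 % figures quoted above are the competitive range.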
998.
Multi-stream automatic speech recognition (MS-ASR) has been confirmed to boost recognition performance in noisy conditions. In such a system, the generation and fusion of the streams are the essential parts and need to be designed so as to reduce the effect of noise on the final decision. This paper shows how to improve the performance of MS-ASR by targeting two questions: (1) how many streams should be combined, and (2) how to combine them. First, we propose a novel approach based on stream reliability to select the number of streams to be fused. Second, a fusion method based on Parallel Hidden Markov Models is introduced. Applying the method to two datasets (TIMIT and RATS) with different noises, we show an improvement in MS-ASR performance.
999.
In this study, we aimed to discriminate between two groups of people. The database used contains 20 patients with Parkinson's disease (PD) and 20 healthy people. Three types of sustained vowels (/a/, /o/ and /u/) were recorded from each participant, and the analyses were then performed on these voice samples. First, an initial feature vector was extracted from the time, frequency, and cepstral domains. We then used linear and nonlinear feature extraction techniques, namely principal component analysis (PCA) and nonlinear PCA, to reduce the number of parameters and choose the most effective acoustic features for classification. Support vector machines with different kernels were used for classification. We obtained an accuracy of up to 87.50 % in discriminating between PD patients and healthy people.
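The PCA step in the pipeline above can be sketched without any library: compute the covariance matrix of the feature vectors and find its top eigenvector by power iteration, then project each sample onto it. The toy "acoustic features" below are an assumption for illustration; real pipelines would use library PCA on the actual voice measures.

```python
# Sketch: project feature vectors onto the top principal component
# (power iteration on the covariance matrix), pure Python.

def top_component(X, iters=100):
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    # covariance matrix of the centered data
    C = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in X) / n
          for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # power iteration -> dominant eigenvector
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, means

# Toy data: almost all variance lies along the first feature axis.
X = [[0, 0.1], [1, 0.0], [2, 0.1], [3, 0.0], [4, 0.1]]
v, means = top_component(X)
proj = [sum((row[j] - means[j]) * v[j] for j in range(2)) for row in X]
print([round(p, 2) for p in proj])
```

The reduced one-dimensional projections would then feed the SVM classifier in place of the full feature vector.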
1000.
Efficient and effective processing of the distance-based join query (DJQ) is of great importance in spatial databases due to the wide range of applications that pose such queries (mapping, urban planning, transportation planning, resource management, etc.). The most representative and most studied DJQs are the K Closest Pairs Query (KCPQ) and the ε Distance Join Query (εDJQ). These spatial queries involve two spatial data sets and a distance function to measure the degree of closeness, along with a given number of pairs in the final result (K) or a distance threshold (ε). In this paper, we propose four new plane-sweep-based algorithms for KCPQs and their extensions for εDJQs in the context of spatial databases, without using an index for either of the two disk-resident data sets (since building and using indexes does not always improve processing performance). They employ a combination of plane-sweep algorithms and space partitioning techniques to join the data sets. Finally, we present the results of an extensive experimental study that compares the efficiency and effectiveness of the proposed algorithms for KCPQs and εDJQs. This performance study, conducted on medium and big spatial data sets (real and synthetic), validates that the proposed plane-sweep-based algorithms are very promising in terms of both efficiency and effectiveness when neither input is indexed. Moreover, the best of the new algorithms is experimentally compared to the best algorithm based on the R-tree (a widely accepted access method) for KCPQs and εDJQs, using the same data sets. This comparison shows that the new algorithms outperform R-tree-based algorithms in most cases.
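The plane-sweep idea behind the KCPQ algorithms can be sketched as follows: sort both sets by x, keep the current K best pairs in a heap, and prune candidate pairs whose x-gap alone already exceeds the K-th smallest distance found so far. This is a simplified illustration of the principle, not the paper's four optimized variants or their space-partitioning extensions.

```python
# Sketch: K closest pairs between two point sets with a plane-sweep-style
# x-distance pruning bound.
import heapq
import math

def kcpq(P, Q, k):
    P, Q = sorted(P), sorted(Q)  # sweep order: ascending x
    heap = []  # max-heap via negated squared distance; holds best k pairs
    for p in P:
        for q in Q:
            if len(heap) == k and (q[0] - p[0]) ** 2 >= -heap[0][0]:
                if q[0] > p[0]:
                    break  # q only moves further right: prune the rest
                continue   # q is far to the left; later q's may still qualify
            d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
            if len(heap) < k:
                heapq.heappush(heap, (-d2, p, q))
            elif d2 < -heap[0][0]:
                heapq.heapreplace(heap, (-d2, p, q))
    return sorted((math.sqrt(-d), p, q) for d, p, q in heap)

P = [(0, 0), (5, 5)]
Q = [(1, 0), (9, 9)]
print(kcpq(P, Q, 2))  # the two closest cross-set pairs, nearest first
```

The εDJQ variant replaces the adaptive heap bound with the fixed threshold ε, which makes the same x-gap pruning even simpler.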